AwesomeTTS Anki add-on: Use Amazon Polly

As its name implies, the AwesomeTTS Anki add-on is awesome. It’s nearly indispensable for language learners.

You can use it in one of two ways:

  1. Subscribe on your own to the text-to-speech services that you plan to use and add those credentials to AwesomeTTS. (à la carte)
  2. Subscribe to the AwesomeTTS+ service and gain access to these services. (prix fixe)

Because I had already subscribed to Google and Azure TTS before AwesomeTTS+ came on the scene, there was no reason for me to pay for the comprehensive prix fixe option. Furthermore, since I’ve never gone above the free tier on any of these services, it makes no sense to pay for something I’m already getting for free. For others, the convenience of a one-stop-shopping experience probably makes the AwesomeTTS+ service worthwhile.

But the developers have chosen to lock Amazon Polly behind their prix fixe service. For someone who is already an Amazon Web Services customer, this makes no sense. AWS already knows how to bill me for its services; so, as with the Google and Azure services I mentioned previously, I have no intention of paying twice. But unlike Google and Azure TTS, Amazon Polly has been locked off from those of us who aren’t AwesomeTTS+ subscribers.

Until now.

The rest of this post describes how I bypassed this limitation.

Prerequisites

Before modifying the AwesomeTTS code, you need to get a couple of things out of the way.

AWS user account

First, you will need to be an AWS user. I’m not going to go into depth with this. Start here.

Install the AWS CLI tools

For simplicity, we are going to access the Amazon Polly TTS via the command-line toolset provided by AWS. To install it, start here. After installing the AWS CLI tools, you will need to add your credentials as described here.
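As a quick sanity check before wiring anything into the add-on, you can verify from Python that the aws executable is actually on your PATH (a minimal sketch, not part of the add-on itself):

```python
import shutil

def aws_cli_available() -> bool:
    """Return True if the `aws` executable can be found on the PATH."""
    return shutil.which('aws') is not None
```

If this returns False, the subprocess call we add later will fail, so sort out the CLI installation first.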

Modify the AwesomeTTS add-on code

On my system, the add-on path is ~/Library/Application Support/Anki2/addons21/1436550454. Within the awesometts directory under that path, you will find the files you need to modify. Both are in the service directory.

Modifications to languages.py

Find the class definition for StandardVoice. Change this:

class StandardVoice(Voice):
    def __init__(self, voice_data):
        self.language_code = voice_data['language_code']
        self.voice_key = voice_data['voice_key']
        self.voice_description = voice_data['voice_description']

to this:

class StandardVoice(Voice):
    def __init__(self, voice_data):
        self.language_code = voice_data['language_code']

        # we need the audio_language_code for Amazon Polly service
        self.audio_language_code = voice_data['audio_language_code']
        self.voice_key = voice_data['voice_key']
        self.voice_description = voice_data['voice_description']

This change is required by the Amazon service: in the AWS CLI call, we need to specify the language code in the format given by the audio_language_code key in the voice info.
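For illustration, the voice data stores these codes with underscores (e.g. fr_FR), while the Polly CLI expects the hyphenated form (fr-FR), so the conversion we will use later is a one-liner:

```python
def to_polly_language_code(audio_language_code: str) -> str:
    """Convert an underscore-style code like 'fr_FR' to the
    hyphenated 'fr-FR' form that the AWS CLI expects."""
    return audio_language_code.replace('_', '-')
```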

Modifications to amazon.py

In the original code, an exception is raised when you aren’t an AwesomeTTS+ subscriber. We need to reverse that logic and formulate our own call. To do this, change the original code here:

def run(self, text, options, path):

    if not self.languagetools.use_plus_mode():
        raise ValueError(f'Amazon is only available on AwesomeTTS Plus')

    voice_key = options['voice']
    voice = self.get_voice_for_key(voice_key)

    rate = options['rate']
    pitch = options['pitch']

    self._logger.info(f'using language tools API')
    service = 'Amazon'
    voice_key = voice.get_voice_key()
    language = voice.get_language_code()
    options = {
        'pitch': pitch,
        'rate': rate
    }
    self.languagetools.generate_audio_v2(text, service, 'batch', language, 'n/a', voice_key, options, path)

to:

def run(self, text, options, path):
    # Nope ↓
    # raise ValueError(f'Amazon is only available on AwesomeTTS Plus')
    rate = options['rate']
    pitch = options['pitch']
    voice_key = options['voice']
    voice = self.get_voice_for_key(voice_key)
    if self.languagetools.use_plus_mode():
        self._logger.info(f'using language tools API')
        service = 'Amazon'
        voice_key = voice.get_voice_key()
        language = voice.get_language_code()
        options = {
            'pitch': pitch,
            'rate': rate
        }
        self.languagetools.generate_audio_v2(text, service, 'batch', language, 'n/a', voice_key, options, path)
    else:
        # roll your own, baby; needs AWS CLI installed
        # along with credentials therewith
        lang_code = voice.audio_language_code.replace('_', '-')
        voice_name = voice.get_key()
        (engine, voice_id) = (voice_name['engine'], voice_name['voice_id'])
        cmd = (f'aws polly synthesize-speech --engine {engine} '
               f'--language-code {lang_code} --output-format mp3 '
               f'--text "{text}" --voice-id {voice_id} "{path}"')
        cmd_list = shlex.split(cmd)
        subprocess.run(cmd_list, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

We also need to import shlex and subprocess, which are used to set up and execute the shell process that communicates with AWS:

import subprocess
import shlex

After you’ve made these changes, you should now have access to Amazon Polly via the AWS CLI call.

Using fswatch to dynamically update Obsidian documents

Although I’m a relative newcomer to Obsidian, I like what I see, especially the templating and data access functionality - both that provided natively and through the Templater and Dataview plugins.

One missing piece is the ability to dynamically update the YAML-formatted metadata in the frontmatter of Obsidian’s Markdown documents. Several threads on both the official support forums and on r/ObsidianMD have addressed this, and there seems to be no real solution.1 No proposed solution - mainly Dataview inline queries or Templater dynamic commands - seems to work consistently.

The solution proposed here is a proof of concept for an entirely different way of addressing the problem. But it requires getting dirty with more command-line programming than many may want to contend with. If you’re game, the basic idea is to watch the vault directories for changes and update the YAML directly, outside of Obsidian.

Use case

I have a YAML field mdate:, which holds the date the file was last modified. Whenever the file is touched, I would like the mdate: field updated. Here’s a sample of my frontmatter:

---
uid:	20210517060102
cdate:	2021-05-17 06:01 
mdate:  2022-11-18 20:06
type:	zettel
---

Straightforward, right?

Solution

As I was unable to implement a solution inside Obsidian, I turned to fswatch, a cross-platform filesystem watcher. When certain events occur in a watched directory, it reports those events in user space.

#!/bin/bash

FP="path/to/my/vault"

function update_mdate() {
   FILE="$1"
   if uname | grep -q "Darwin"; then
      # macOS ships the BSD version of stat
      MODDATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M" "$FILE")
   else
      # Linux uses the GNU coreutils version of stat
      MODDATE=$(stat -c "%y" "$FILE" \
         | xargs \
         | awk '{split($0,a,":"); printf "%s:%s\n", a[1], a[2]}' )
   fi
   # note: -i '' is BSD sed syntax; GNU sed takes -i with no argument
   sed -i '' -E "s/(modification date:).*/\1  $MODDATE/g" "$FILE"
   sed -i '' -E "s/(mdate:).*/\1  $MODDATE/g" "$FILE"
}

/usr/local/bin/fswatch -0 --format="%p|%f" "$FP" | while read -d "" event; do
   [[ $event =~ ".DS_Store" ]] && continue
   [[ $event =~ "IsDir" ]] && continue 
   [[ ! $event =~ "Updated" ]] && continue
   
   # ignore anything that's not a Markdown file
   [[ ! $event =~ ".md" ]] && continue 
   
   # ignore file removal events
   [[ $event =~ "Removed" ]] && continue
   # ignore swap file bs
   [[ $event =~ ".swp" ]] && continue
   
   # Ignore what may be swap files that Obsidian uses
   UPDATED_FILE=$(echo "$event" | cut -d "|" -f1)
   [[ ! $event =~ ".!" ]] && update_mdate "$UPDATED_FILE"
done

I’ll explain the highlights of the above code. The main loop wraps the fswatch invocation. I won’t go into depth on the --format parameter; essentially we are asking for the path of the altered file along with the list of events.

Most of the remaining logic of this loop is to filter out unwanted events, directories, and files. For example:

[[ ! $event =~ "Updated" ]] && continue

ensures that only Updated events will be processed. The rest of these filters are fairly self-explanatory.

One additional feature of these event and file filters needs mentioning: Obsidian appears to write some kind of temporary swap file before committing changes to the main file. These files follow a naming convention - the main filename prefixed with “.!” - so the following logic only processes files that do not match this pattern:

# Ignore what may be swap files that Obsidian uses
UPDATED_FILE=$(echo "$event" | cut -d "|" -f1)
[[ ! $event =~ ".!" ]] && update_mdate "$UPDATED_FILE"

Updating the mdate

The logic for updating the modified date parameter is embedded in the Bash function update_mdate.

function update_mdate() {
   FILE="$1"
   if uname | grep -q "Darwin"; then
      # macOS ships the BSD version of stat
      MODDATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M" "$FILE")
   else
      # Linux uses the GNU coreutils version of stat
      MODDATE=$(stat -c "%y" "$FILE" \
         | xargs \
         | awk '{split($0,a,":"); printf "%s:%s\n", a[1], a[2]}' )
   fi
   # note: -i '' is BSD sed syntax; GNU sed takes -i with no argument
   sed -i '' -E "s/(modification date:).*/\1  $MODDATE/g" "$FILE"
   sed -i '' -E "s/(mdate:).*/\1  $MODDATE/g" "$FILE"
}

Most of the complexity in the date-updating function is in handling stat differently depending on the platform: macOS uses the BSD version of stat, while Linux uses the coreutils version. After parsing the last-modified date from stat, we use sed to splice it into the document. Some of my documents use mdate: and others modification date:, so we handle both. I haven’t had the chance to test the Linux piece thoroughly, but I believe it should work.
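As an aside, the same splice can be sketched in portable Python, which sidesteps the BSD-versus-GNU differences entirely (a proof of concept, not what my script uses; like the sed version, rewriting the file bumps its mtime, so the watcher must ignore its own writes):

```python
import datetime
import os
import re

# matches either frontmatter key that the sed commands handle
MDATE_RE = re.compile(r'^(mdate:|modification date:).*$', re.MULTILINE)

def update_mdate(path: str) -> None:
    """Splice the file's last-modified timestamp into its frontmatter."""
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    stamp = mtime.strftime('%Y-%m-%d %H:%M')
    with open(path, encoding='utf-8') as fh:
        content = fh.read()
    content = MDATE_RE.sub(lambda m: f'{m.group(1)}  {stamp}', content)
    with open(path, 'w', encoding='utf-8') as fh:
        fh.write(content)
```

Note the sketch matches any line beginning with mdate: anywhere in the file, not just inside the frontmatter block, which is good enough for notes that keep such keys only in the frontmatter.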

Then we just have to keep the script running. On my macOS machine, I use LaunchControl to set it up as a User Agent. If you’re comfortable with launchd, you can set it up directly without the help of a GUI application.


  1. For example, this thread on the official Obsidian forums discusses the issue with using dynamic queries in the YAML. One solution offered was to embed the dynamic command in single quotes: ‘<%+ tp.file.last_modified_date() %>’. This did not work in my case, nor did it work for at least one other respondent. As of right now, I don’t think there’s a good solution, apart from the approach suggested in this article, if you want the frontmatter YAML to update dynamically. ↩︎

Week functions in Dataview plugin for Obsidian

There are a couple of features of the Dataview plugin for Obsidian that aren’t documented and are potentially useful. For the start of the week, use date(sow), and for the end of the week, date(eow). Since there’s no documentation as of yet, I’ll venture a guess that they are locale-dependent. For me (in Canada), sow is Monday. Since I do my weekly notes on Saturday, I have to subtract a couple of days to point to them.

Scraping Forvo pronunciations

Most language learners are familiar with Forvo, a site that lets users download and contribute pronunciations for words and phrases. For my Russian studies, I make daily use of the site. In fact, to facilitate my Anki card-making workflow, I am a paid user of the Forvo API. But that’s where the trouble started. When the Forvo API works, it works tolerably, though it is often extremely slow. Lately, it has been down more than up.

A regex to remove Anki's cloze markup

Recently, someone asked a question on r/Anki about changing an existing cloze-type note to a regular note. Part of the solution involves stripping the cloze markup from the existing cloze’d field. A cloze sentence has the form Play {{c1::stupid}} games. or Play {{c1::stupid::pejorative adj}} games. To handle both of these cases, the following regular expression will work; just substitute for $1: {{c\d+::([^:}]+)(?:::[^}]*)?}} However, the Cloze Anything markup is different. It uses ( and ) instead of curly braces.
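The same substitution can be sketched in Python (using \1 in place of $1, per Python’s re replacement syntax):

```python
import re

# a cloze with an optional hint: {{c1::answer}} or {{c1::answer::hint}}
CLOZE_RE = re.compile(r'\{\{c\d+::([^:}]+)(?:::[^}]*)?\}\}')

def strip_cloze(text: str) -> str:
    """Replace each cloze deletion with its bare answer text."""
    return CLOZE_RE.sub(r'\1', text)
```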

Anki: Insert the most recent image

I make a lot of Anki cards, so I’m on a constant quest to make the process more efficient. Like a lot of language-learners, I use images on my cards where possible in order to make the word or sentence more memorable. Process When I find an image online that I want to use on the card, I download it to ~/Documents/ankibound. A Hazel rule then grabs the image file and converts it to a .

Altering Anki's revlog table, or how to recover your streak

Anki users are protective of their streak - the number of consecutive days they’ve done their reviews. Right now, for example, my streak is 621 days. So if you miss a day for whatever reason, not only do you have to deal with double the number of reviews, but you also deal with the emotional toll of having lost your streak. You can lose your streak for one of several reasons.

A deep dive into my Anki language learning: Part III (Sentences)

Welcome to Part III of a deep dive into my Anki language learning decks. In Part I I covered the principles that guide how I set up my decks and the overall deck structure. In the lengthy Part II I delved into my vocabulary deck. In this installment, Part III, we’ll cover my sentence decks. Principles First, sentences (and still larger units of language) should eventually take precedence in language study. What help is it to know the word for “tomato” in your L2 if you don’t know how to slice a tomato, how to eat a tomato, how to grow a tomato plant?

A deep dive into my Anki language learning: Part II (Vocabulary)

In Part I of my series on my Anki language-learning setup, I described the philosophy that informs my Anki setup and touched on the deck overview. Now I’ll tackle the largest and most complex deck(s), my vocabulary decks. First, some FAQs about my vocabulary deck: Do you organize it as L1 → L2 or as L2 → L1, or both? Actually, it’s both and more. Keep reading. Do you have separate subdecks by language level, or source, or some other characteristic?

A deep dive into my Anki language learning: Part I (Overview and philosophy)

Although I’ve been writing about Anki for years, it’s been in bits and pieces. Solving little problems. Creating efficiencies. But I realized that I’ve never taken a top-down approach to my Anki language learning system. So consider this post the launch of that overdue effort. Caveats A few caveats at the outset: I’m not a professional language tutor or pedagogue of any sort, really. Much of what I’ve developed, I’ve done through trial and error, some intuition, and some reading on relevant topics.