Flatten airports in X-Plane

Some airports in X-Plane have terrain issues that can be quite entertaining.

This Delta 737-800 got lost in the maze of cargo ramps at PANC and was trying to taxi back to the terminal when it encountered a steep icy taxiway. It required 65% N1 just to get up the slope.

Clearly a fix is required, and it turns out to be quite simple. In the global airports file apt.dat, find the offending airport. In this case it’s PANC, whose entry looks like:

1    149 0 0 PANC Ted Stevens Anchorage Intl
1302 city Anchorage
1302 country USA United States
1302 datum_lat 61.174155556
1302 datum_lon -149.998188889
1302 faa_code ANC
1302 gui_label 3D
1302 iata_code ANC
1302 icao_code PANC
1302 region_code PA
1302 state Alaska
1302 transition_alt 18000
...

To flatten the airport terrain, add the line 1302 flatten 1 after the airport header, so that the block now looks like:

1    149 0 0 PANC Ted Stevens Anchorage Intl
1302 flatten 1
1302 city Anchorage
1302 country USA United States
1302 datum_lat 61.174155556
1302 datum_lon -149.998188889
1302 faa_code ANC
1302 gui_label 3D
1302 iata_code ANC
1302 icao_code PANC
1302 region_code PA
1302 state Alaska
1302 transition_alt 18000
...

But what happens when X-Plane is updated? The global apt.dat file gets overwritten, taking the fix with it. My workaround is to reapply the fix programmatically after each update:

#!/bin/bash

# Path to the global airports file; adjust for your X-Plane version.
APTDAT="$HOME/X-Plane 12/Global Scenery/Global Airports/Earth nav data/apt.dat"

# Append the flatten directive directly after the PANC header line.
perl -pi -e '$_ .= qq(1302 flatten 1\n) if /PANC\sTed.+/' "$APTDAT"
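One caveat: the one-liner appends another flatten line every time it runs, so running it twice against the same apt.dat duplicates the directive. A guarded variant might look like the sketch below; the function name, its pattern argument, and the already-patched check are my additions, not part of the original workaround.

```shell
#!/bin/bash
# Guarded variant (my addition): only insert the flatten directive if it
# isn't already present in the file.
flatten_airport() {
  local aptdat="$1" pattern="$2"
  # Already patched? Then leave the file alone.
  grep -q '^1302 flatten 1' "$aptdat" && return 0
  # Append the directive after the line matching the airport header.
  PAT="$pattern" perl -pi -e '$_ .= qq(1302 flatten 1\n) if /$ENV{PAT}/' "$aptdat"
}

# Usage:
#   flatten_airport "$HOME/X-Plane 12/Global Scenery/Global Airports/Earth nav data/apt.dat" 'PANC\sTed'
```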

I suppose I could create a custom airport directory for the airports that I flatten, but then I’d lose out on any future modifications to those airports that Laminar Research publishes in its updates. I’ll have to think about that one.

Hazel deletes custom file icons, and a workaround

I use Hazel extensively for automating file management tasks on my macOS systems. Recently I found that Hazel aggressively matches an invisible system file that appears whenever you use a custom file or folder icon. I’ll describe the problem and present a workaround.

In a handful of directories, I have a rule that prevents users (me) from adding certain file types. For example, the rule matches any file that is not an image and deletes it. This is all well and good until you try to add a custom icon to the directory. Since the Icon? file that gets created as a result is not an image, the Hazel rule dutifully deletes it.

On my first attempt at a fix, I tried to match the filename against Icon? but that didn’t work, because the actual name is Icon$'\r' — that is, “Icon” followed by a carriage return. Thinking this special character \r would be too hard to match, I moved on to plan B.

Plan B in this case is to match on a particular bit of file metadata, the kMDItemDisplayName key. This is what Finder displays, but it is not the actual file name, which explains Hazel’s inability to match on it. So if we run:

mdls -n kMDItemDisplayName Icon$'\r'
# prints kMDItemDisplayName = "Icon?"

Now we just need to strip the double quotes and match. Before we put this into a rule with a Passes shell script criterion, let’s think about what we’re trying to do and how the logic of this criterion works. The criterion is considered to match when the script returns a 0 exit code. But in our case, we are setting up a rule that says “If the file is not an image and is not a custom icon file, then delete it.” So if the file matches Icon?, we need to return 1 to negate the second clause of this rule. Here’s how to do it:

# Field 3 of the mdls output is the quoted display name; tr strips the quotes.
R=$(mdls -n kMDItemDisplayName "$1" | cut -d ' ' -f 3 | tr -d \")
# mdls renders the carriage return as '?'; convert it to a plain 'x'.
R=$(echo "$R" | tr '?' 'x')
# Exit 1 (criterion not met) for the icon file, 0 for everything else.
[[ "$R" =~ ^Iconx$ ]] && exit 1 || exit 0

Explanation:

  1. cut -d ' ' -f 3 cuts the result of the previous command into fields separated by the space character and returns the third field of that list.
  2. tr -d \" strips the quotes from the resulting string.
  3. In the next line tr '?' 'x' changes all ? characters to x because that makes regex matching in the next line easier. Bash regular expressions are quite limited, so it’s easier this way.

There may be a different/better way, but this works!
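For what it’s worth, one possibly simpler route is mdls -raw, which prints the bare attribute value without the key name or quotes. The sketch below is my own untested variant, not the post’s method, and the helper name is hypothetical:

```shell
#!/bin/bash
# Alternative sketch (mine, not the post's): with `mdls -raw` the cut/tr
# plumbing can be dropped and the name prefix matched directly.

# True when a display name is that of a custom-icon file: "Icon" followed
# by a carriage return, which Finder and mdls render as "Icon?".
is_custom_icon() {
  case "$1" in
    Icon*) return 0 ;;
    *)     return 1 ;;
  esac
}

# In the Hazel criterion body you would then run something like:
#   name=$(mdls -raw -name kMDItemDisplayName "$1")
#   is_custom_icon "$name" && exit 1 || exit 0
```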

AwesomeTTS Anki add-on: Use Amazon Polly

As its name implies, the AwesomeTTS Anki add-on is awesome. It’s nearly indispensable for language learners.

You can use it in one of two ways:

  1. Subscribe on your own to the text-to-speech services that you plan to use and add those credentials to AwesomeTTS. (à la carte)
  2. Subscribe to the AwesomeTTS+ service and gain access to these services. (prix fixe)

Because I had already subscribed to Google and Azure TTS before AwesomeTTS+ came on the scene, there was no reason for me to pay for the comprehensive prix fixe option. Furthermore, since I’ve never gone above the free tier on any of these services, it makes no sense for me to pay for something I’m already getting for free. For others, the convenience of a one-stop-shopping experience probably makes the AwesomeTTS+ service worthwhile.

Using fswatch to dynamically update Obsidian documents

Although I’m a relative newcomer to Obsidian, I like what I see, especially the templating and data access functionality - both that provided natively and through the Templater and Dataview plugins.

One missing piece is the ability to dynamically update the YAML-formatted metadata in the frontmatter of Obsidian’s Markdown documents. Several threads on both the official support forums and on r/ObsidianMD have addressed this, and there seems to be no real solution. None of the proposed solutions - mainly Dataview inline queries or Templater dynamic commands - works consistently.
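Still, a crude version of this can be built outside Obsidian. The sketch below is my own and untested against a real vault; the vault path, the modified: key, and the date format are all assumptions. It uses fswatch to re-stamp a frontmatter key in any note that changes:

```shell
#!/bin/bash
# My sketch: watch an Obsidian vault with fswatch and rewrite a `modified:`
# key in any Markdown note that changes. Path and key name are assumptions.

VAULT="${VAULT:-$HOME/Obsidian/Vault}"

# Rewrite the `modified:` line with today's date. (Naive: this matches any
# line starting with `modified:`, not strictly the frontmatter block.)
stamp_modified() {
  local note="$1"
  local today
  today=$(date +%Y-%m-%d)
  perl -pi -e "s/^modified:.*/modified: $today/" "$note"
}

# Watch the vault and re-stamp any Markdown file that changes. In practice
# you would also need to ignore the write that stamp_modified itself
# performs, or the loop will retrigger itself.
watch_vault() {
  fswatch --event Updated -0 "$VAULT" | while IFS= read -r -d '' path; do
    [[ "$path" == *.md ]] && stamp_modified "$path"
  done
}
```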

Week functions in Dataview plugin for Obsidian

There are a couple features of the Dataview plugin for Obsidian that aren’t documented and are potentially useful.

For the start of the week, use date(sow), and for the end of the week, date(eow). Since there’s no documentation as of yet, I’ll venture a guess that they are locale-dependent. For me (in Canada), sow is Monday. Since I do my weekly notes on Saturday, I have to subtract a couple of days to point to them.
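To illustrate, an inline query along these lines should render a link to the preceding Saturday’s note. This is my own sketch, assuming Dataview’s dateformat() and dur() functions and weekly notes named like 2023-04-01:

```
`= "[[" + dateformat(date(sow) - dur(2 days), "yyyy-MM-dd") + "]]"`
```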

Scraping Forvo pronunciations

Most language learners are familiar with Forvo, a site that allows users to download and contribute pronunciations for words and phrases. For my Russian studies, I make daily use of the site. In fact, to facilitate my Anki card-making workflow, I am a paid user of the Forvo API. But that’s where the trouble started.

When the Forvo API works, it works tolerably well, though often extremely slowly. But lately, it has been down more than up. In an effort to patch my workflow and continue to download Russian word pronunciations, I wrote this little scraper. I’d prefer to use the API, but experience has shown that it is slow and unreliable. I’ll keep paying for API access, because I support what the company does. As often as not, when a company offers a free service, it’s involved in surveillance capitalism; I’d rather companies offer a reliable product at a reasonable price.

A regex to remove Anki's cloze markup

Recently, someone asked a question on r/Anki about converting an existing cloze-type note to a regular note. Part of the solution involves stripping the cloze markup from the existing cloze’d field. A cloze sentence has the form Play {{c1::stupid}} games. or Play {{c1::stupid::pejorative adj}} games.

To handle both of these cases, the following regular expression will work; the cloze’d text is captured in $1.

\{\{c\d::([^:\}]+)(?:::+[^\}]*)*\}\}

However, the Cloze Anything markup is different: it uses doubled parentheses instead of the doubled curly braces. If we want to flexibly remove both the standard and the Cloze Anything markup, our pattern needs to accept either delimiter pair.

Anki: Insert the most recent image

I make a lot of Anki cards, so I’m on a constant quest to make the process more efficient. Like a lot of language learners, I use images on my cards where possible to make the word or sentence more memorable.

Process

When I find an image online that I want to use on the card, I download it to ~/Documents/ankibound. A Hazel rule then grabs the image file and converts it to a .webp file with relatively low quality and a maximum horizontal dimension of 200px. The small size and low quality let me store lots of images without overwhelming storage capacity or, more importantly, causing long synchronization times.
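The conversion step can be approximated in shell. This sketch is mine; the cwebp flags and quality value are guesses at what the Hazel rule does, not the author’s exact settings:

```shell
#!/bin/bash
# My sketch of the conversion step: pick the newest download and convert it
# to a small lossy WebP. Requires Google's cwebp tool.

INBOX="$HOME/Documents/ankibound"

# Name of the most recently modified file in a directory.
newest_file() {
  ls -t "$1" | head -n 1
}

# Convert an image to WebP at modest quality, capped at 200 px wide
# (the 0 tells cwebp to preserve the aspect ratio).
to_webp() {
  local src="$1"
  cwebp -q 60 -resize 200 0 "$src" -o "${src%.*}.webp"
}

# Usage:
#   to_webp "$INBOX/$(newest_file "$INBOX")"
```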

Altering Anki's revlog table, or how to recover your streak

Anki users are protective of their streak - the number of consecutive days they’ve done their reviews. Right now, for example, my streak is 621 days. So if you miss a day for whatever reason, not only do you have to deal with double the number of reviews, but you must also deal with the emotional toll of having lost your streak.

You can lose your streak for one of several reasons. You could simply have been lazy. You may have forgotten to do your Anki. Or travel across time zones put you in a situation where Anki’s clock and your clock differ. Others have described a procedure for resetting the computer’s clock as a way of recovering a lost streak; it apparently works, though I haven’t tried it. Instead, I’ll focus on a technique that involves working directly with the Anki database.
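The core of that technique is inserting a back-dated row into the revlog table; the id column is the review’s epoch-millisecond timestamp, which is what streak calculations key on. The sketch below is my own illustration - the cloned-row approach and function name are mine, not a blessed procedure - and it should only ever be tried on a copy of collection.anki2 with Anki closed:

```shell
#!/bin/bash
# My sketch, not a blessed procedure: clone the most recent revlog row,
# giving the copy a timestamp (epoch ms) inside the missed day.
# revlog columns: id, cid, usn, ease, ivl, lastIvl, factor, time, type.

backdate_review() {
  local db="$1" when_ms="$2"
  sqlite3 "$db" "INSERT INTO revlog
    SELECT $when_ms, cid, usn, ease, ivl, lastIvl, factor, time, type
    FROM revlog ORDER BY id DESC LIMIT 1;"
}

# Usage (work on a COPY of your collection):
#   backdate_review backup.anki2 1673816400000
```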

A deep dive into my Anki language learning: Part III (Sentences)

Welcome to Part III of a deep dive into my Anki language-learning decks. In Part I, I covered the principles that guide how I set up my decks and the overall deck structure. In the lengthy Part II, I delved into my vocabulary deck. In this installment, we’ll cover my sentence decks.

Principles

First, sentences (and still larger units of language) should eventually take precedence in language study. What good is knowing the word for “tomato” in your L2 if you don’t know how to talk about slicing a tomato, eating a tomato, or growing a tomato plant? Focusing on larger units of language increases your success in integrating vocabulary into daily use.