hrplot.png

Yet another diversion to keep me from focusing on actually using Anki to learn Russian. I stumbled on R, a programming language that focuses on statistical analysis.

Here are a couple of snippets that begin to scratch the surface of what’s possible. Important caveat: I’m an R novice at best. There are probably much better ways of doing some of this…

Counting notes with a particular model type

Here we’ll use R to do what we did previously with Python.

First load some of the libraries we’ll need:

library(rjson)
library(RSQLite)
library(DBI)

Next we’ll connect to the database and extract the model information:

# connect to the Anki database
dbpath <- "path to your collection"
con = dbConnect(RSQLite::SQLite(),dbname=dbpath)

# get information about the models
modelInfo <- as.character(dbGetQuery(con,'SELECT models FROM col'))
models <- fromJSON(modelInfo)

Since the model information is stored as JSON, we’ll need to parse the JSON to build a data frame that we can use to extract the model ID that we’ll need.

names <- c()
mid <- names(models)
for(i in 1:length(mid))
{
  names[i] <- models[[mid[i]]]$name
}
models <- data.frame(cbind(mid,names))

Next we’ll extract the model ID (mid) from this data frame so that we can find all of the notes with that model ID:

verbmid <- as.numeric(as.character(models[models$names=="Русский - глагол","mid"]))

# query the notes database for notes with this model
query <- paste("SELECT COUNT(id) FROM notes WHERE mid =",verbmid)
res <- as.numeric(dbGetQuery(con,query))

And of course, close the connection to the Anki SQLite database:

dbDisconnect(con)

As of this writing, res tells me I have 702 notes with the verb model type (named “Русский - глагол” in my collection).

Counting hours per month in Anki

Ever wonder how many hours per month you spend reviewing in Anki? Here’s an R program that will grab review time information from the database and plot it for you. I ran across the original idea in this blog post by Gene Dan, but did a little work on the x-axis scale to get it to display correctly.

library(RSQLite)
library(DBI)
library(rjson)
library(anytime)
library(sqldf)
library(zoo)
library(ggplot2)

dbpath <- "/Users/alan/Library/Application Support/Anki2/Alan - Russian/collection.anki2"
con = dbConnect(RSQLite::SQLite(),dbname=dbpath)
#get reviews
rev <- dbGetQuery(con,'select CAST(id as TEXT) as id
, CAST(cid as TEXT) as cid
, time
from revlog')


cards <- dbGetQuery(con,'select CAST(id as TEXT) as cid, CAST(did as TEXT) as did from cards')

#Get deck info - from the decks field in the col table
deckinfo <- as.character(dbGetQuery(con,'select decks from col'))
decks <- fromJSON(deckinfo)

names <- c()
did <- names(decks)
for(i in 1:length(did))
{
  names[i] <- decks[[did[i]]]$name
}

decks <- data.frame(cbind(did,names))
#decks$names <- as.character(decks$names)

cards_w_decks <- merge(cards,decks,by="did")
#Date is UNIX timestamp in milliseconds, divide by 1000 to get seconds
rev$revdate <- as.yearmon(anydate(as.numeric(rev$id)/1000))

# Assign deck info to reviews
rev_w_decks <- merge(rev,cards_w_decks,by="cid")
time_summary <- sqldf("select revdate, sum(time) as Time from rev_w_decks group by revdate")
time_summary$Time <- time_summary$Time/3.6e+6

ggplot(time_summary,aes(x=revdate,y=Time))+geom_bar(stat="identity",fill="#d93d2a")+
scale_x_yearmon()+
ggtitle("Hours per Month") +
xlab("Review Date") +
ylab("Time (hrs)") +
theme(axis.text.x=element_text(hjust=2,size=rel(1))) +
theme(plot.title=element_text(size=rel(1.5),vjust=.9,hjust=.5)) +
guides(fill = guide_legend(reverse = TRUE))

dbDisconnect(con)

You should get a plot like the one at the top of the post.

I’m anxious to learn more about R and apply it to understanding my performance in Anki.

Since one of the cornerstones of my approach to learning the Russian language has been to track how many words I’ve learned and their frequencies, I was intrigued by reading the following statistics today:

  • The 15 most frequent words in the language account for 25% of all the words in typical texts.
  • The first 100 words account for 60% of the words appearing in texts.
  • 97% of the words one encounters in an ordinary text will be among the first 4000 most frequent words.

In other words, if you learn the 4000 most frequent words of a language, you’ll recognize nearly every word you encounter in ordinary texts.

Source - Five Cornerstones for Second-Language Acquisition - the Neurophysiological Opportunist’s Way - Olle Kjellin, M.D., Ph.D. but originally from The Cambridge Encyclopedia of Language (Crystal, 1995)
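Those claims are really just statements about cumulative frequency. As a quick, hedged illustration (mine, not from either source), here is how you could check the coverage of a frequency list of your own; the file name and CSV layout in the comment are hypothetical:

def coverage(counts, n):
    """Fraction of all running text covered by the n most frequent words.

    counts: word frequencies (token counts) sorted from most to least frequent.
    """
    return sum(counts[:n]) / float(sum(counts))

# Hypothetical usage, assuming a file 'ru_frequency.csv' with word,count rows
# sorted by descending frequency (the filename and format are made up here):
#
#   counts = [int(line.split(',')[1]) for line in open('ru_frequency.csv')]
#   print coverage(counts, 100)   # should land near 0.60 if the claim above holds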

Continuing my series on accessing the Anki database outside of the Anki application environment, here’s a piece on accessing the note type model. You may wish to start here with the first article on accessing the Anki database. This is geared toward mac OS. (If you’re not on mac OS, then start here instead.)

The note type model

Since Anki notes contain flexible fields, the model for a note type is stored as JSON. The best-guess definition of that JSON is:

{
    "css": "CSS, shared for all templates",
    "did": "Long specifying the id of the deck that cards are added to by default",
    "flds": [
        "JSONArray containing object for each field in the model as follows:",
        {
            "font": "display font",
            "media": "array of media. appears to be unused",
            "name": "field name",
            "ord": "ordinal of the field - goes from 0 to num fields -1",
            "rtl": "boolean, right-to-left script",
            "size": "font size",
            "sticky": "sticky fields retain the value that was last added when adding new notes"
        }
    ],
    "id": "model ID, matches cards.mid",
    "latexPost": "String added to end of LaTeX expressions",
    "latexPre": "preamble for LaTeX expressions",
    "mod": "modification time in milliseconds",
    "name": "model name",
    "req": [
        "Array of arrays describing which fields are required for each card to be generated",
        [
            "array index, 0, 1, ...",
            "? string, all",
            "another array",
            ["appears to be the array index again"]
        ]
    ],
    "sortf": "Integer specifying which field is used for sorting (browser)",
    "tags": "Anki saves the tags of the last added note to the current model",
    "tmpls": [
        "JSONArray containing object of CardTemplate for each card in model",
        {
            "afmt": "answer template string",
            "bafmt": "browser answer format: used for displaying answer in browser",
            "bqfmt": "browser question format: used for displaying question in browser",
            "did": "deck override (null by default)",
            "name": "template name",
            "ord": "template number, see flds",
            "qfmt": "question format string"
        }
    ],
    "type": "Integer specifying what type of model. 0 for standard, 1 for cloze",
    "usn": "Update sequence number: used in same way as other usn values in db",
    "vers": "Legacy version number (unused)"
}

Our goal today is to count all of the notes that have a given note type. Fortunately, there’s a built-in method for this:

verbModel = col.models.byName(u'Русский - глагол')

Here we find the model object (a Python dictionary) named ‘Русский - глагол’ (that’s “Russian verb,” by the way). To access its id:

modelID = verbModel['id']

Now we just have to count:

query = """SELECT COUNT(id) from notes WHERE mid = {}""".format(verbModel['id'])
verbNotes = col.db.scalar(query)

print 'There are {:.5g} verb notes.'.format(verbNotes)

And that’s it for this little adventure in the Anki database.


I previously wrote about accessing the Anki database using Python on mac OS. Extending that post, I’ll show how to work with a specific deck in this short post.

To use a named deck you’ll need its deck ID. Fortunately there’s a built-in method for finding a deck ID by name:

col = Collection(COLLECTION_PATH)
dID = col.decks.id(DECK_NAME)

Now in queries against the cards and notes tables we can apply the deck ID to restrict them to a certain deck. For example, to find all of the cards currently in the learning stage:

query = """SELECT COUNT(id) FROM cards where type = 1 AND did = dID"""
learningCards = col.db.scalar(query)

print 'There are {:.5g} learning cards.'.format(learningCards)

And close the collection:

col.close()


Not long ago I ran across this post detailing a method for opening and inspecting the Anki database using Python outside the Anki application environment. However, the approach requires linking to the Anki code base which is inaccessible on mac OS since the Python code is packaged into a Mac app on this platform.

The solution I’ve found is inelegant but simple: download the Anki code base to a location on your file system where you can link to it from your code. You can find the Anki code here on GitHub.

Once that’s done, you’re ready to load an Anki collection. First, the preliminaries:

#!/usr/bin/python

import sys

# paths
ANKI_PATH = 'path to where you downloaded the anki codebase'
COLLECTION_PATH = "path to the Anki collection"

sys.path.append(ANKI_PATH)
from anki import Collection

Now we’re ready to open the collection:

col = Collection(COLLECTION_PATH)

And execute a simple query to print out the total number of cards in the collection:

query = """SELECT COUNT(id) from cards"""
totalCards = col.db.scalar(query)

print 'There are {:.5g} total cards.'.format(totalCards)

Then close the collection:

col.close()

That’s it. Ideally, we’d be able to link to the Anki code bundled with the Mac application. Maybe there’s a way. In the meantime, here’s the entire little app:

#!/usr/bin/python

import sys

# paths
ANKI_PATH = '/Users/alan/Documents/dev/projects/PersonalProjects/anki'
COLLECTION_PATH = "/Users/alan/Documents/Anki/Alan - Russian/collection.anki2"

sys.path.append(ANKI_PATH)
from anki import Collection

col = Collection(COLLECTION_PATH)

query = """SELECT COUNT(id) from cards"""
totalCards = col.db.scalar(query)

print 'There are {:.5g} total cards.'.format(totalCards)

col.close()

For the last two years, I’ve been working through a 10,000-word Russian vocabulary list ordered by frequency. I have a goal of finishing the list before the end of 2019. This requires not only stubborn persistence but also an efficient process for collecting the information that goes onto my Anki flash cards.

My manual process has been to work from a Numbers spreadsheet. As I collect information about each word from several websites, I log it in this table.

numbers-sheet-ru.png

For each word, I do the following:

  1. From Open Russian I obtain an example sentence or two.
  2. From Wiktionary I obtain the definition, more example phrases, any particular grammatical information I need, and audio of the pronunciation if it is available. I also capture the URL from this site onto my flash card.
  3. From the Russian National Corpus I capture the frequency according to their listing in case I want to reorder my frequency list in the future.

This involves lots of cutting, pasting, and tab-switching, so I devised an automated approach to loading up this information. The most complicated part was downloading the Russian pronunciation from Wiktionary, which I did with Python.

Downloading pronunciation files from Wiktionary

class WikiPage(object):
    """Wiktionary page - source for the extraction"""
    def __init__(self, ruWord):
        super(WikiPage, self).__init__()
        self.word = ruWord
        self.baseURL = u'http://en.wiktionary.org/wiki/'
        self.anchor = u'#Russian'

    def url(self):
        return self.baseURL + self.word + self.anchor

First, we initialize a WikiPage object by building the main page URL using the Russian word we want to capture. We can capture the page source and look for the direct link to the audio file that we want:

    def page(self):
        return requests.get(self.url())

    def audioLink(self):
        searchObj = re.search("commons(\\/.+\\/.+\\/Ru-.+\\.ogg)", self.page().text, re.M)
        return searchObj.group(1)

The function audioLink returns a link to the .ogg file that we want to download. Now we just have to download the file:

    def downloadAudio(self):
        path = join(expanduser("~"), 'Downloads', self.word + '.ogg')
        try:
            mp3file = urllib2.urlopen(self.fullAudioLink())
        except AttributeError:
            print "There appears to be no audio."
            notify("No audio", "Wiktionary has no pronunciation", "Pronunciation is not available for download.", sound=True)
        else:
            with open(path, 'wb') as output:
                output.write(mp3file.read())
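One detail worth noting: downloadAudio calls a fullAudioLink method that isn’t shown in these excerpts (the complete script is in the gist linked below). Presumably it just prepends the Wikimedia Commons host to the path that audioLink extracted; a minimal sketch, assuming that layout:

    def fullAudioLink(self):
        # Assumption: the audio file is served from Wikimedia Commons, so the
        # absolute URL is the Commons host plus the path captured by audioLink().
        return u'https://upload.wikimedia.org/wikipedia/commons' + self.audioLink()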

Now to kick-off the process, we just have to get the word from the mac OS pasteboard, instantiate a WikiPage object and call downloadAudio on it:

word = xerox.paste().encode('utf-8')
wikipage = WikiPage(word)
if DEBUG:
    print wikipage.url()
    print wikipage.fullAudioLink()
wikipage.downloadAudio()

If you’d like to see the entire Python script, the gist is here.

Automating Google Chrome

Next we want to automate Chrome to pull up the word in the reference websites. We’ll do this in AppleScript.

set searchTerm to the clipboard as text
set openRussianURL to "https://en.openrussian.org/ru/" & searchTerm
set wiktionaryURL to "https://en.wiktionary.org/wiki/" & searchTerm & "#Russian"

There we grab the word off the clipboard and build the URL for both sites. Next we’ll look for a tab that contains the Russian National Corpus site and execute a page search for our target word. That way I can easily grab the word frequency from the page.

tell application "Google Chrome" to activate

-- initiate the word find process in dict.ruslang.ru
tell application "Google Chrome"
-- find the tab with the frequency list
set i to 0
repeat with t in (every tab of window 1)
set i to i + 1
set searchURLText to (URL of t) as text
if searchURLText begins with "http://dict.ruslang.ru/" then
set active tab index of window 1 to i
exit repeat
end if
end repeat
end tell

delay 1

tell application "System Events"
tell process "Google Chrome"
keystroke "f" using command down
delay 0.5
keystroke "V" using command down
delay 0.5
key code 36
end tell
end tell

Then we need to load the word definition pages using the URLs that we built earlier:

-- load word definitions
tell application "Google Chrome"
    activate
    set i to 0
    set tabList to every tab of window 1
    repeat with theTab in tabList
        set i to i + 1
        set textURL to (URL of theTab) as text
        -- load the word in open russian
        if textURL begins with "https://en.openrussian.org" then
            set URL of theTab to openRussianURL
        end if
        -- load the word in wiktionary
        if textURL begins with "https://en.wiktionary.org" then
            set URL of theTab to wiktionaryURL
            -- make the wiktionary tab the active tab
            set active tab index of window 1 to i
        end if
    end repeat
end tell

Finally, using do shell script we can fire off the Python script to download the audio. Actually, I have the AppleScript do that first to allow time to process the audio as I’ve described previously. Lastly, I created a Quicksilver trigger to start the entire process from a single keystroke.

Granted, I have a very specific use case here, but hopefully you’ve been able to glean something useful about process automation of Chrome and using Python to download pronunciation files from Wiktionary. Cheers.

I wrote a piece previously about using JavaScript in Anki cards. Although I haven’t found many uses for employing this idea, it does come up from time to time, including a recent use case I’m writing about now.

After downloading a popular French frequency list deck for my daughter to use, I noticed that it omits the gender of nouns in the French prompt. In school, I was always taught to memorize the gender along with the noun. For example, when you memorize the word for law, “loi,” you should memorize it with either the definite article “la” or the indefinite article “une” so that the feminine gender of the noun is inseparable from the noun itself. But this deck has only the noun prompt and I was afraid that my daughter would fail to memorize the noun’s gender. JavaScript to the rescue.

Since the gender is encoded in a field, we can capitalize on that to insert the right article. My preference is to use the definite articles “le” or “la” where possible. But it gets increasingly complex from there. Nouns that begin with a vowel such as “avocat” require “l’avocat” which obscures the gender. In that case, I’d prefer the indefinite article “un avocat”. Then there’s the “h”. Most words beginning with “h” behave like those with vowels. But some words have h aspiré. With those words, we keep the full definite article without the apostrophe.

So we start with a couple easy preliminaries, such as detecting vowels:

// returns true if the character
// is a vowel
function vowelTest(s) {
    return (/^[aeiou]$/i).test(s);
}

Now we turn our attention to whether a word would need an apostrophe with the definite article. I’m not actually going to use the apostrophe; instead, we’ll fall back to the indefinite article “un/une” in this case.

// returns true if the word would need
// an apostrophe if used with the
// definite article
function needsApostrophe(str) {
    if (str[0] == 'h') {
        // h words that do not need apostrophe
        var aspire = ["hache","hachisch","haddock","haïku",
            "haillon","haine","hall",
            "halo","halte","hamac",
            "hamburger","hameau","hammam",
            "hampe","hamster","hanche",
            "hand-ball","handicap","hangar",
            "harde","hareng","hargne",
            "haricot","harpail","harpon",
            "hasard","hauteur","havre","hère",
            "hérisson","hernie","héron",
            "héros","herse","hêtre",
            "hiatus","hibou","hic",
            "hickory","hiérarchie","hiéroglyphe",
            "hobby","Hollande","homard",
            "Hongrie","honte","hoquet",
            "houe","houle","hooligan",
            "houppe","housse","houx",
            "houblot","huche","huguenot"
        ];
        return (aspire.indexOf(str) == -1);
    }
    return vowelTest(str[0]);
}

Now we can wrap this up into a function that adds an article, either definite or indefinite to the noun:

// adds either definite or indefinite article
function addArticle(str, genderstr) {
    if (needsApostrophe(str)) {
        return (genderstr == "nm") ? "un " + str : "une " + str;
    }
    return (genderstr == "nm") ? "le " + str : "la " + str;
}

The first step is to make sure that the part of speech field is visible to the script. We do this by inserting it into the card template.

<span id="pos">{{Part of Speech}}</span>

Don’t worry, we’ll hide it in a minute.

Then we can obtain the contents of the field and add the gender-specific article accordingly.

var content = document.getElementById("pos").innerHTML;
var fword = document.getElementsByClassName("frenchwordless")[0].innerHTML;
var artword = addArticle(fword, content);
document.getElementsByClassName("frenchwordless")[0].innerHTML = artword;

And we can hide the gender sentinel field:

document.getElementById("pos").style.visibility = "hidden";

Ideally, French Anki decks would be constructed in such a way that the gender is embedded in the noun to be memorized, but with a little creative use of JavaScript, we can retool it on-the-fly.

ghgraph.jpg

Spurious sensor data can wreak havoc in an otherwise finely-tuned home automation system. I use temperature data from an Aeotec MultiSensor 6 to monitor the environment in our greenhouse. Living in Canada, I cannot rely solely on passive systems to maintain the temperature, particularly at night. So, using the temperature and humidity measurements transmitted back to the controller over Z-wave, I control devices inside the greenhouse that heat and humidify the environment.

But spurious temperature and humidity data mean that I often falsely trigger the heating and humidification devices. After dealing with this for several weeks, I came up with a workable solution that can be applied to other sensor data. Note that the solution I developed uses time-averaging of the data; if you need to react to the data quickly, the averaging window needs to be shortened, or you may need to look for a different solution.

I started by trying to ascertain exactly what the spurious temperature data were. It turns out that the spurious data points were usually 0’s, but occasionally odd non-zero values would crop up. In all cases the values were lower than the actual value, and always by a lot (40 or more degrees F difference).

In most cases with Indigo, for simplicity, we trigger events based on absolute values. When spurious data are present, for whatever reason, false triggers will result. My approach takes advantage of the fact that Indigo keeps a database of sensor data. By default it logs these data points to a SQLite database at /Library/Application Support/Perceptive Automation/Indigo 7/Logs/indigo_history.sqlite. I used the application Base, a GUI SQLite client for macOS, to explore the structure a bit. Each device has a table named device_history_xxxxxxxx; you simply need to know the device identifier, which you can easily find in the Indigo application. Exploring the table, you can see how the data are stored.

base.jpg

To employ a strategy of time-averaging and filtering the data, I decided to pull the last 10 values from the SQLite database. As I get data about every 30 seconds from the sensor, my averaging window is about 5 minutes. It turns out this is quite easy:

import sqlite3

SQLITE_PATH = ('/Library/Application Support/Perceptive Automation/'
               'Indigo 7/Logs/indigo_history.sqlite')

SQLITE_TN = 'device_history_114161618'
SQLITE_TN_ALIAS = 'gh'

conn = sqlite3.connect(SQLITE_PATH)
c = conn.cursor()
SQL = "SELECT gh.sensorvalue FROM {tn} AS {alias} " \
      "ORDER BY ts DESC LIMIT 10".format(tn=SQLITE_TN, alias=SQLITE_TN_ALIAS)

c.execute(SQL)
all_rows = c.fetchall()

Now all_rows contains a list of single-item tuples. In the next step, I filter out obviously spurious values (anything not greater than 1, which catches the zeros) and compact the list of tuples into a flat list of values:

tempsF = filter(lambda a: a > 1, [i[0] for i in all_rows])

But some spurious data remain. Remember that many of the errant values are 0.0, but some are just lower than the actual values. To catch these, I create a list of the differences from one value to the next and search for significant deviations (5°F in this case). Having found which value creates the large difference, I exclude it from the list.[1]

diffs = [abs(x[1] - x[0]) for x in zip(tempsF[1:], tempsF[:-1])]
idx = 0
for diff in diffs:
    if diff > 5:
        break
    else:
        idx = idx + 1
filtTempsF = tempsF[:idx+1] + tempsF[idx+2:]

Finally, since it’s a moving average I need to actually average the data.

avgTempsF = reduce(lambda x,y : x + y, filtTempsF) / len(filtTempsF)

In summary, this gives me a filtered, time-averaged dataset that excludes spurious data. For applications that are very time-sensitive, this approach won’t work as is. But for most environmental controls, it’s a workable solution to identifying and filtering wonky sensor data.

For reference, the entire script follows:

# Update the greenhouse temperature in degrees C.
# The sensor reports values in F, so we update the value
# we see whenever the primary data has any change.

import sqlite3

# device and variable definitions
IDX_CURRENT_TEMP = 1822850463
IDX_FORMATTED = 1778207310
DEV_GH_TEMP = 114161618
SQLITE_PATH = '/Library/Application Support/Perceptive Automation/Indigo 7/Logs/indigo_history.sqlite'
SQLITE_TN = 'device_history_114161618'
SQLITE_TN_ALIAS = 'gh'

DEBUG_GH = True

def F2C(ctemp):
    return round((ctemp - 32) / 1.8, 1)

def CDeviceTemp(deviceID):
    device = indigo.devices[deviceID]
    tempF = device.sensorValue
    return F2C(tempF)

def movingAverageF():
    conn = sqlite3.connect(SQLITE_PATH)
    c = conn.cursor()
    SQL = "SELECT gh.sensorvalue from {tn} as {alias} ORDER BY ts DESC LIMIT 10".format(tn=SQLITE_TN, alias=SQLITE_TN_ALIAS)
    c.execute(SQL)
    all_rows = c.fetchall()
    tempsF = filter(lambda a: a > 1, [i[0] for i in all_rows])
    diffs = [abs(x[1] - x[0]) for x in zip(tempsF[1:], tempsF[:-1])]
    idx = 0
    for diff in diffs:
        if diff > 5:
            break
        else:
            idx = idx + 1
    filtTempsF = tempsF[:idx+1] + tempsF[idx+2:]
    avgTempsF = reduce(lambda x, y: x + y, filtTempsF) / len(filtTempsF)
    return avgTempsF

def movingAverageC():
    return F2C(movingAverageF())

# compute moving average
avgC = F2C(movingAverageF())

# current greenhouse temperature in degrees C
ghTempC = F2C(indigo.devices[DEV_GH_TEMP].sensorValue)
indigo.server.log("GH temp: raw={0}C, filtered moving avg={1}C".format(ghTempC, avgC))

# update the server variables (°C temp and formatted string)
indigo.variable.updateValue(IDX_CURRENT_TEMP, value=unicode(avgC))
indigo.variable.updateValue(IDX_FORMATTED, value="{0}°C".format(avgC))

  1. As I was preparing this post, I realized that this approach misses the possibility of a dataset having more than one spurious data point. Empirically, I did not notice any occurrence of that, but it's possible. I'll have to account for that in the future.
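One hedged way to close that gap, sketched here rather than taken from the script above, is to filter each window against its median instead of chaining pairwise differences; that drops every far-off reading no matter how many there are:

def filterOutliersF(temps, maxDev=5.0):
    """Drop every reading more than maxDev degrees away from the window median.

    Unlike the pairwise-difference approach above, this handles a window that
    contains several spurious readings. (Sketch only; not in the script above.)
    """
    if not temps:
        return temps
    ordered = sorted(temps)
    mid = len(ordered) // 2
    # median of the window
    median = ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2.0
    return [t for t in temps if abs(t - median) <= maxDev]

With the 10-sample window above, filtTempsF = filterOutliersF(tempsF) would replace the difference-based exclusion step.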

With Trump, the usual advice of “Follow the money” doesn’t work because Congress refuses to force him to disclose his conflicts of interest. As enormous and material as those conflicts must be, I’m just going to focus on what I can see with my own eyes: the man’s apparent intent.

In his public life, Donald Trump has never done anything that did not personally and directly benefit him. Most of us, as we go through life, assemble a collection of acts that are variously self-serving and other-serving. This is the way of life. Normal life. With Trump, not so. Even his meager philanthropic acts are tainted with controversy. The man simply cannot act in a sacrificial way. He is incurable.[1]

As a corollary, when considering his dismissal of FBI Director Comey yesterday, I plan to apply that principle until a special prosecutor is appointed. Since Trump acts only in his own personal best interest, I’m going to assume that in firing Mr. Comey, he is personally benefitting from it.

trumpclassact.jpg

Since the evidence suggests that Trump’s concern was over the Russia investigation, it’s safest to presume the firing was about the Russia investigation, notwithstanding the feeble excuses of his staff, who were caught off-guard by the event.

We would all do well to re-read Masha Gessen’s piece in the New York Review of Books, “Autocracy: Rules for Survival.” Her Rule #1: “Believe the autocrat. He means what he says.” remains applicable. If Trump is fuming about the Russia investigation, he probably fired the very man investigating his Administration’s ties to Russia because of it.


  1. In a campaign event in Fort Dodge, Iowa on November 12, 2015, Trump claimed that rival Ben Carson was "pathological" and that "...if you're pathological, there's no cure for that, folks, okay? There's no cure for that." Since Trump's own psychopathology is widely questioned, one wonders if he, too, is incurable. Given that narcissistic personality disorder is almost certainly among the potential diagnoses, he probably is incurable.