Regardless of how closely you adhere to the twenty rules of formulating knowledge, some cards seem destined for leechdom. For me, part of the problem is that with languages, straight-up vocabulary cards take words out of the rich context in which they exist in the wild. With my maturing collection of Russian decks, I recently started going through these resistant cards to figure out why they are so difficult.
It’s always best to let Anki set intervals according to its view of your performance. That said, there are times when directly altering the interval makes sense. For example, to build out a complete representation of the Russian National Corpus, I’m forced to enter vocabulary terms that should be obvious to even elementary Russian learners but which aren’t yet in my nearly 24,000-card database. Therefore, I’m entering these cards gradually.
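To make that concrete, here is a minimal sketch of a direct interval change, assuming the pre-2.1.x Anki Python API; the card ID is hypothetical, and this should only ever be run against a copy of the collection with Anki closed:

```python
from anki import Collection

CARD_ID = 1386500000000  # hypothetical card id
col = Collection("collection.anki2")  # a copy, not the live collection

card = col.getCard(CARD_ID)
card.ivl = 180                     # interval in days, overriding the scheduler
card.due = col.sched.today + 180   # reschedule the next review to match
card.flush()
col.close()
```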
Recently I hit an extra digit when setting up a custom new card session and was stuck with hundreds of new cards to review. Desperate to fix this, I started poking around the Anki collection SQLite database and found the data responsible for the extra cards. In the col table, look for the newToday key; the extra card count is expressed there as a negative integer. Change it to zero and you’ll be good.
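Here is a sketch of that fix, assuming the older 2.1 schema in which the per-deck counters live in the decks JSON column of the col table; close Anki and work on a backup copy first:

```python
import json
import sqlite3

conn = sqlite3.connect("collection.anki2")
(decks_json,) = conn.execute("SELECT decks FROM col").fetchone()
decks = json.loads(decks_json)

for deck in decks.values():
    day, count = deck["newToday"]
    if count < 0:
        # The negative count is what inflates today's new card limit;
        # zeroing it restores the deck's normal limit.
        deck["newToday"] = [day, 0]

conn.execute("UPDATE col SET decks = ?", (json.dumps(decks),))
conn.commit()
conn.close()
```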
There’s a phenomenon that veteran Anki users are familiar with: the so-called “Anki hell” or “ease hell.”
Origins of ease hell

The descent into ease hell has to do with the way Anki handles correct and incorrect answers when it presents cards for review. Ease is a numerical score associated with every card in the database, representing a valuation of the card’s difficulty. By default, when a card graduates from the learning phase, an ease of 250% is applied to it.
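To see why a sagging ease matters, here is a toy calculation using the default numbers as I understand them: a “Good” answer multiplies the interval by the ease, and each “Again” subtracts 20 percentage points from the ease, down to a floor of 130%.

```python
def ease_after_lapses(ease: float, lapses: int) -> float:
    """Ease after a number of 'Again' answers, floored at 130%."""
    return max(1.30, ease - 0.20 * lapses)

for lapses in range(0, 7):
    ease = ease_after_lapses(2.50, lapses)
    # Growth of a 10-day interval over three consecutive "Good" answers.
    ivl = 10.0 * ease ** 3
    print(f"{lapses} lapses -> ease {ease:.0%}, 10 days grows to {ivl:.0f} days")
```

With no lapses, a 10-day interval stretches to roughly 156 days after three good reviews; at the 130% floor it only reaches about 22 days, so the card keeps coming back. That gap is the hell.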
In Anki 2.1, it’s practically impossible to get plain text from the web into note fields. A solution (on macOS, at least)…
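The post’s actual solution is elided above; for what it’s worth, one common macOS workaround is to round-trip the clipboard through pbpaste and pbcopy, which drops the styled flavors and leaves only plain text. A sketch in Python:

```python
import subprocess

# Read the clipboard as plain text, then write it back so only the
# plain-text flavor survives; paste into Anki afterwards.
text = subprocess.run(["pbpaste"], capture_output=True, text=True).stdout
subprocess.run(["pbcopy"], input=text, text=True, check=True)
```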
Yet another diversion to keep me from focusing on actually using Anki to learn Russian. I stumbled on R, a programming language focused on statistical analysis.
Here are a couple of snippets that begin to scratch the surface of what’s possible. Important caveat: I’m an R novice at best, and there are probably much better ways of doing some of this…
Counting notes with a particular model type

Here we’ll use R to do what we did previously with Python.
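For reference, the Python version being referred to looked roughly like this; a sketch assuming the older schema in which the col table stores note type models as a single JSON column, with MODEL_NAME as a placeholder:

```python
import json
import sqlite3

MODEL_NAME = "Russian Vocabulary"  # hypothetical note type name

conn = sqlite3.connect("collection.anki2")
(models_json,) = conn.execute("SELECT models FROM col").fetchone()
models = json.loads(models_json)

# Find the model's id by name, then count the notes that use it.
mid = next(m["id"] for m in models.values() if m["name"] == MODEL_NAME)
(count,) = conn.execute(
    "SELECT count() FROM notes WHERE mid = ?", (mid,)
).fetchone()
print(f"{count} notes use the {MODEL_NAME!r} model")
conn.close()
```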
Continuing my series on accessing the Anki database outside of the Anki application environment, here’s a piece on accessing the note type model. You may wish to start here, with the first article on accessing the Anki database. This is geared toward macOS. (If you’re not on macOS, then start here instead.)
The note type model

Because notes in Anki have flexible fields, the model for a note type is stored as JSON.
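For example, here is a sketch of pulling one model out and inspecting it, assuming the pre-2.1.x Python API and linking against the Anki code base as described in the first article; the model name is hypothetical:

```python
import json
from anki import Collection  # from the linked Anki code base

COLLECTION_PATH = "collection.anki2"  # work on a copy, not the live file
MODEL_NAME = "Russian Vocabulary"     # hypothetical note type name

# byName() hands back the model as a plain Python dict decoded from
# that JSON: fields live under "flds", card templates under "tmpls".
col = Collection(COLLECTION_PATH)
model = col.models.byName(MODEL_NAME)
print(json.dumps(model, indent=2, ensure_ascii=False))
col.close()
```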
I previously wrote about accessing the Anki database using Python on macOS. In this short post, I’ll extend that one and show how to work with a specific deck.
To use a named deck, you’ll need its deck ID. Fortunately, there’s a built-in method for finding a deck ID by name:
```python
col = Collection(COLLECTION_PATH)
dID = col.decks.id(DECK_NAME)
```

Now, in queries against the cards and notes tables, we can apply the deck ID to restrict them to a particular deck.
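For example, a sketch of a deck-restricted query, assuming the pre-2.1.x col.db wrapper around the underlying SQLite connection:

```python
# Cards carry the deck ID in their did column; notes don't reference a
# deck directly, so we reach them through their cards.
card_ids = col.db.list("SELECT id FROM cards WHERE did = ?", dID)
note_ids = col.db.list("SELECT DISTINCT nid FROM cards WHERE did = ?", dID)
print(f"{len(card_ids)} cards across {len(note_ids)} notes in {DECK_NAME}")
```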
Not long ago I ran across this post detailing a method for opening and inspecting the Anki database using Python, outside the Anki application environment. However, the approach requires linking to the Anki code base, which is inaccessible on macOS because the Python code is packaged inside the Mac app on that platform.
The solution I’ve found is inelegant but simple: download the Anki code base to a location on your file system where you can link to it in your code.
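A minimal sketch of the workaround, with a hypothetical path and assuming the pre-2.1.x API:

```python
import sys

# Put the downloaded Anki source on sys.path before importing from it.
ANKI_CODE_PATH = "/Users/you/code/anki"  # hypothetical location
sys.path.insert(0, ANKI_CODE_PATH)

from anki import Collection  # now resolves against the downloaded source

col = Collection("/Users/you/anki-copy/collection.anki2")  # use a copy
print(col.cardCount())
col.close()
```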
For the last two years, I’ve been working through a 10,000-word Russian vocabulary list ordered by frequency, with the goal of finishing it before the end of 2019. This requires not only stubborn persistence but also an efficient process for collecting the information that goes onto my Anki flash cards.
My manual process has been to work from a Numbers spreadsheet. As I collect information about each word from several websites, I log it in this table.
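Once a table like this is exported to CSV, a few lines of Python can turn it into a tab-separated file that Anki’s text importer understands; a sketch, with invented column names standing in for the real spreadsheet headers:

```python
import csv

# Anki's importer maps tab-separated fields to note fields in order;
# "word", "translation", and "example" are hypothetical column names.
with open("vocab.csv", newline="", encoding="utf-8") as src, \
        open("vocab_import.txt", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        dst.write("\t".join([row["word"], row["translation"], row["example"]]) + "\n")
```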