Our capacity to read words that contain letter transpositions, as illustrated by the widely circulated email about research at Cambridge University, is a hallmark of flexible orthographic processing. We show that baboons, previously trained to discriminate words from nonwords, pass the Cambridge University test: they make more false-positive errors on nonwords created by transposing two letters of a word they know than on nonwords created by substituting two letters of the same word with different letters. To shed light on the underlying mechanisms, we trained artificial neural networks to classify the same words and nonwords as the baboons. The networks received pixels, single letters, or letter combinations as input. All networks learned to discriminate words from nonwords, but only the letter-combination model reproduced the transposed-letter effect. Our results suggest that baboons discriminate words from nonwords using flexible orthographic codes based on letter combinations.
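The abstract does not spell out the letter-combination code; a common choice in the orthographic-processing literature is open bigrams, i.e. ordered letter pairs drawn from a word. A minimal sketch, assuming an open-bigram code, of why a transposed-letter nonword overlaps more with its base word than a substituted-letter nonword (the items `judge`, `jugde`, and `junpe` are illustrative, not stimuli from the study):

```python
from itertools import combinations

def open_bigrams(word):
    # All ordered letter pairs that preserve relative order in the word,
    # at any separation (an "open" bigram code).
    return {a + b for a, b in combinations(word, 2)}

def overlap(w1, w2):
    # Jaccard similarity between the two words' open-bigram sets.
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return len(b1 & b2) / len(b1 | b2)

word = "judge"
transposed = "jugde"    # 'd' and 'g' swapped
substituted = "junpe"   # 'd' and 'g' replaced by 'n' and 'p'

print(round(overlap(word, transposed), 2))   # 0.82: most bigrams survive
print(round(overlap(word, substituted), 2))  # 0.18: most bigrams destroyed
```

Because transposition preserves nearly all ordered letter pairs while substitution destroys every pair involving the changed letters, a classifier reading open-bigram input will find transposed-letter nonwords far more word-like, matching the higher false-alarm rate reported for the baboons.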