Many computational models of visual word recognition and reading postulate a central role for phonology. None, however, has successfully simulated one key phenomenon associated with fast-acting phonological influences during word recognition: masked phonological priming (e.g., bloo primes BLUE better than blai primes BLUE). The challenge for computational models is not only to simulate such masked phonological priming effects but also, at the same time, to correctly read aloud irregular words. This double challenge constitutes a new benchmark phenomenon: the fast-phonology test. It has previously been shown that the dual route cascaded model of reading aloud (DRC) does not pass the fast-phonology test unless it is assumed that lexical decisions are always made on the basis of lexical phonological activation. Here we show that the Bimodal Interactive Activation Model (BIAM), an extension of the interactive activation model, can pass the fast-phonology test while maintaining the ability to discriminate between words and nonwords on the basis of orthographic activation alone. The BIAM achieves this by implementing a fast parallel mapping of letters onto input phonemes, rather than onto output phonemes as in the DRC. It is argued that the BIAM provides an improved architecture for a general model of visual word recognition and reading.