Young children can exploit the syntactic context of a novel word to narrow down its probable meaning, a phenomenon known as syntactic bootstrapping. To use syntactic bootstrapping in service of lexical acquisition, a learner must first have identified the semantic information that a syntactic context provides. According to the semantic seed hypothesis, children discover the semantic predictiveness of syntactic contexts by tracking the distribution of familiar words. We propose that these learning mechanisms fit within a larger cognitive model: the predictive processing framework. On this view, we perceive and make sense of the world by constantly generating probabilistic predictions about what will happen next. We outline evidence that prediction operates within language acquisition, and show how this framework helps explain both how lexical knowledge refines syntactic predictions and how syntactic knowledge refines predictions about the meanings of novel words. The predictive processing framework entails that learners can adapt to recent information and update their linguistic model. We review recent experimental work showing that the predictions preschool children derive from a syntactic context can change when recent input provides contrary evidence. We end by discussing challenges in applying the predictive processing framework to syntactic bootstrapping and propose avenues for future work.