The answer to the headline question is, of course, yes, since that is exactly what an AI system built to interact with the real world is designed to do: after some interactions, it begins to prejudge new interactions based on their similarity to prior ones, rather than treating each one as a completely new circumstance (to which it could only react randomly). Doing the latter would be called artificial naivety, not artificial intelligence. Recognizable analogues of human prejudices will arise whenever those prejudices have a statistical basis in the system's experience, and the stronger that basis, the more rapidly and strongly they will emerge; such a basis will exist whenever the training data is unwittingly (or maliciously) biased.
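This mechanism can be illustrated with a deliberately minimal sketch, using assumed toy data rather than any real system: a frequency-based predictor that, for each group, simply returns the most common outcome seen in past interactions. If the historical record is skewed, the model's prejudgments are skewed to match.

```python
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (group, outcome) pairs from past interactions."""
    counts = defaultdict(Counter)
    for group, outcome in examples:
        counts[group][outcome] += 1
    return counts

def predict(counts, group):
    # Prejudge a new case from the statistics of similar past cases.
    return counts[group].most_common(1)[0][0]

# Hypothetical, unwittingly biased history: group "A" was mostly approved,
# group "B" mostly denied.
history = ([("A", "approve")] * 9 + [("A", "deny")] * 1
           + [("B", "approve")] * 2 + [("B", "deny")] * 8)

model = train(history)
print(predict(model, "A"))  # approve
print(predict(model, "B"))  # deny
```

The stronger the statistical skew in `history`, the more confidently the predictor reproduces it; nothing in the code singles out either group, yet the learned behavior does.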